
    Cost-effectiveness of cerebrospinal biomarkers for the diagnosis of Alzheimer’s disease

    Background: Accurate and timely diagnosis of Alzheimer’s disease (AD) is important for prompt initiation of treatment in patients with AD and to avoid inappropriate treatment of patients with false-positive diagnoses. Methods: Using a Markov model, we estimated the lifetime costs and quality-adjusted life-years (QALYs) of cerebrospinal fluid biomarker analysis in a cohort of patients referred to a neurologist or memory clinic with suspected AD who remained without a definitive diagnosis of AD or another condition after neuroimaging. Parameter values were estimated from previous health economic models and the medical literature. Extensive deterministic and probabilistic sensitivity analyses were performed to evaluate the robustness of the results. Results: At a 12.7% pretest probability of AD, biomarker analysis after normal neuroimaging findings has an incremental cost-effectiveness ratio (ICER) of $11,032 per QALY gained. Results were sensitive to the pretest prevalence of AD, and the ICER increased to over $50,000 per QALY when the prevalence of AD fell below 9%. Results were also sensitive to patient age (biomarkers are less cost-effective in older cohorts), treatment uptake and adherence, biomarker test characteristics, and the degree to which patients with suspected AD who do not have AD benefit from AD treatment when they are falsely diagnosed. Conclusions: The cost-effectiveness of biomarker analysis depends critically on the prevalence of AD in the tested population. In general practice, where the prevalence of AD after clinical assessment and normal neuroimaging findings may be low, biomarker analysis is unlikely to be cost-effective at a willingness-to-pay threshold of $50,000 per QALY gained. However, when at least 1 in 11 patients has AD after normal neuroimaging findings, biomarker analysis is likely cost-effective.
Specifically, for patients referred to memory clinics with memory impairment who do not present neuroimaging evidence of medial temporal lobe atrophy, the pretest prevalence of AD may exceed 15%. Biomarker analysis is a potentially cost-saving diagnostic method and should be considered for adoption in high-prevalence centers
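As a rough sketch of the ICER logic underlying the Results above: the ratio is simply incremental cost divided by incremental QALYs, compared against a willingness-to-pay threshold. All input numbers below are hypothetical placeholders for illustration, not values from the study:

```python
# Minimal sketch of the incremental cost-effectiveness ratio (ICER).
# All numeric inputs are illustrative placeholders, not study values.

def icer(cost_test, cost_no_test, qaly_test, qaly_no_test):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_test - cost_no_test) / (qaly_test - qaly_no_test)

def cost_effective(icer_value, willingness_to_pay=50_000):
    """A strategy is deemed cost-effective if its ICER is below the threshold."""
    return icer_value < willingness_to_pay

# Hypothetical lifetime costs ($) and QALYs with and without biomarker testing
ratio = icer(cost_test=41_000, cost_no_test=38_000,
             qaly_test=5.2, qaly_no_test=4.93)
print(round(ratio), cost_effective(ratio))
```

Note how the ratio is highly sensitive to the QALY denominator: a small drop in incremental benefit (as when pretest prevalence falls) pushes the ICER above the threshold.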

    Active normal faulting along the western slope of Monte Morrone (Central Apennines, Italy)

    The Central Apennines are affected by active normal fault systems potentially responsible for high-magnitude earthquakes (up to 7). Some strong historical earthquakes that occurred in this sector of the Apennine chain have been attributed to the activation of some of these fault systems, by means of palaeoseismological analyses and by comparing the damage distribution associated with those seismic events with the spatial distribution of the active faults. To other active tectonic structures, however, no historical seismic event known from the catalogues can be associated, and they are therefore regarded as silent seismogenic structures. High seismic hazard is accordingly attributed to these faults. The present study aims to characterise the Late Quaternary activity of one of these silent faults, namely the one bounding the western slope of Monte Morrone (in the Abruzzi Apennines), seeking to define 1) its kinematics, 2) its slip rate, and 3) the maximum magnitude expected from an activation event. The analyses (including geological, geomorphological and structural surveying, as well as 14C dating and tephrostratigraphic determinations) carried out along the surface expression of this tectonic structure, which consists of two parallel, NW-SE-trending fault segments, confirmed that it is mainly characterised by normal kinematics, with a minor left-oblique component. These kinematics are consistent with extension oriented roughly N20°. The slip rate of the western fault segment was defined by identifying chronologically constrained deposits (mainly alluvial fans) displaced by the activity of this segment. The slip rate turned out to be on the order of 0.4±0.07 mm/yr.
As for the eastern segment, its Late Pleistocene-Holocene activity is indicated by the displacement along it of slope deposits attributed to the Last Glacial Maximum (LGM). However, the lack of coeval sediments and/or landforms in the footwall block prevented an assessment of this segment's slip rate. Nevertheless, the geological-structural analyses performed, together with a critical review of the available literature on evolutionary models of normal fault systems, allowed us to hypothesise for the eastern fault segment a slip rate greater than zero but lower than that defined for the western segment, i.e. <0.4±0.07 mm/yr. This makes it possible to assign the entire Monte Morrone fault system a slip rate of between 0.4±0.07 and 0.8±0.09 mm/yr. Finally, by applying the empirical equations proposed by Wells and Coppersmith (1994), which relate moment magnitude to i) the surface length of the tectonic structure and ii) the (maximum and average) displacement per activation event, and considering a recurrence time of about 2000 years, it was possible to establish that the maximum magnitude expected from an earthquake originating along the Monte Morrone normal fault system (about 23 km long) is on the order of 6.6-6.7
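The magnitude-length step above can be illustrated with the commonly quoted Wells and Coppersmith (1994) regression for surface rupture length (all slip types), Mw ≈ 5.08 + 1.16·log10(SRL). The coefficients here are the widely reproduced ones from that paper's summary tables and should be checked against the original before any real hazard use:

```python
import math

# Sketch of the Wells & Coppersmith (1994) magnitude vs. surface rupture
# length (SRL) regression, all slip types: Mw = 5.08 + 1.16 * log10(SRL_km).
# Coefficients as commonly quoted from that paper; verify against the original.

def mw_from_rupture_length(srl_km, a=5.08, b=1.16):
    """Expected moment magnitude for a given surface rupture length in km."""
    return a + b * math.log10(srl_km)

# For the ~23 km Monte Morrone system this yields a value consistent
# with the Mw 6.6-6.7 estimate reported in the abstract.
print(round(mw_from_rupture_length(23.0), 1))
```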

    Adverse Outcome of Early Recurrent Ischemic Stroke Secondary to Atrial Fibrillation after Repeated Systemic Thrombolysis

    Background. Recurrent ischemic stroke is associated with adverse neurological outcome in patients with atrial fibrillation. There is very scarce information regarding the neurological outcome of atrial fibrillation patients undergoing repeated systemic thrombolysis after early recurrent ischemic stroke. Clinical Case and Discussion. We describe the case of a 76-year-old woman with known paroxysmal atrial fibrillation who was admitted because of an acute right middle cerebral artery ischemic stroke and who underwent repeated systemic thrombolysis within 110 hours. The patient underwent systemic thrombolysis after the first ischemic stroke with almost complete neurological recovery. On the fourth day after treatment, an acute left middle cerebral artery ischemic stroke was diagnosed and she was treated with full-dose intravenous recombinant tissue plasminogen activator. A hemorrhagic transformation of the left middle cerebral artery infarction was noted on follow-up cranial computed tomographic scans. The patient did not recover from the second cerebrovascular event and died 25 days after admission. Conclusion. To the best of our knowledge, this is the second case reporting the adverse neurological outcome of a patient with a diagnosis of atrial fibrillation undergoing repeated systemic thrombolysis after early recurrent ischemic stroke. Our report adds to the scarce available evidence suggesting that repeated systemic thrombolysis for recurrent ischemic stroke should be avoided

    Using Self-Organizing Maps for the Behavioral Analysis of Virtualized Network Functions

    Detecting anomalous behaviors in a network function virtualization infrastructure is of the utmost importance for network operators. In this paper, we propose a technique, based on Self-Organizing Maps, to address this problem by leveraging the massive amount of historical system data that is typically available in these infrastructures. Our method consists of a joint analysis of system-level metrics, provided by the virtualized infrastructure monitoring system and referring to resource consumption patterns of the physical hosts and the virtual machines (or containers) that run on top of them, and application-level metrics, provided by the monitoring subsystems of the individual virtualized network functions and related to the performance levels of the individual applications. The implementation of our approach has been validated on real data coming from a subset of the Vodafone infrastructure for network function virtualization, where it is currently employed to support the decisions of data center operators. Experimental results show that our technique is capable of identifying specific points in space (i.e., components of the infrastructure) and time of the recent evolution of the monitored infrastructure that are worth investigating by human operators in order to keep the system running under expected conditions
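The SOM-based detection described above can be sketched roughly as follows. This is a minimal illustration in plain NumPy, not the authors' implementation: the grid size, decay schedules, and the use of quantization error (distance of a metric vector to its best-matching unit) as the anomaly score are all assumptions:

```python
import numpy as np

def train_som(data, grid=(5, 5), iters=500, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small Self-Organizing Map on row-wise metric vectors."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, data.shape[1]))
    # Grid coordinates of each unit, used by the neighborhood function
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
        lr = lr0 * np.exp(-t / iters)                      # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)                # shrinking neighborhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        g = np.exp(-d2 / (2 * sigma ** 2))                 # Gaussian neighborhood
        weights += lr * g[:, None] * (x - weights)         # pull units toward x
    return weights

def anomaly_score(weights, x):
    """Quantization error: distance from x to its best-matching unit."""
    return np.sqrt(((weights - x) ** 2).sum(axis=1)).min()
```

After training on historical metrics from normal operation, a new sample whose quantization error is far above the typical training-set error would be flagged for a human operator to inspect.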

    Co/Ni element ratio in the galactic cosmic rays between 0.8 and 4.3 GeV/nucleon

    In a one-day balloon flight of the Trans-Iron Galactic Element Recorder (TIGER) in 1997, the instrument achieved excellent charge resolution for elements near the Fe peak, permitting a new measurement of the element ratio Co/Ni. The best fit to the data, extrapolated to the top of the atmosphere, gives an upper limit for this ratio of 0.093±0.037 over the energy interval 0.8 to 4.3 GeV/nucleon; because a Co peak is not seen in the data, this result is given as an upper limit. Comparing this upper limit with calculations by Webber & Gupta suggests that at the source of these cosmic rays a substantial amount of the electron-capture isotope 59Ni survived. This conclusion is in conflict with the clear evidence from ACE/CRIS below 0.5 GeV/nucleon that there is negligible 59Ni surviving at the source. Possible explanations for this apparent discrepancy are discussed

    Behavioral analysis for virtualized network functions: A som-based approach

    In this paper, we tackle the problem of detecting anomalous behaviors in a virtualized infrastructure for network function virtualization, proposing to use self-organizing maps to analyze the historical data available through a data center. We propose a joint analysis of system-level metrics, mostly related to resource consumption patterns of the hosted virtual machines, as available through the virtualized infrastructure monitoring system, and the application-level metrics published by individual virtualized network functions through their own monitoring subsystems. Experimental results, obtained by processing real data from one of the NFV data centers of the Vodafone network operator, show that our technique is able to identify specific points in space and time of the recent evolution of the monitored infrastructure that are worth investigating by a human operator in order to keep the system running under expected conditions

    Active normal faulting and deep-seated gravitational slope deformations: the case of the western slope of Monte Morrone (Central Apennines, Italy)

    This work aims to investigate the relationship between tectonic activity and the triggering of deep-seated gravitational deformations along mountain slopes. According to the existing literature, tectonics can play a twofold role in influencing the gravitational evolution of slopes: i) a passive role, related to its influence on the structural setting of the slopes, which may be inherited from a tectonic phase that is no longer active; ii) an active role, represented by the modifications it can impose on the slopes, increasing the relief energy and the tensional stress experienced by the rock volumes. With this in mind, a study was carried out along the western slope of Monte Morrone, the relief bounding the Sulmona basin to the east, in the Abruzzi Apennines, which is a thrust-related anticline formed during the Mio-Pliocene. This slope is affected by an active, NW-SE-trending normal fault system, about 23 km long, consisting of two parallel fault segments, one located in the middle sector of the slope and one at the base of the relief. Along this slope, landforms such as trenches, elongated depressions and counterslope scarps, indicating the occurrence of deep-seated gravitational movements (sackung-type), had previously been recognised. Geomorphological-structural observations were carried out to map all the morphological features related to the deep-seated gravitational movements. Four exploratory excavations were also made within two gravitational trenches, seeking elements useful for characterising the recent evolution of these gravitational phenomena (Fig. 1). The analyses performed made it possible to establish that these large-scale gravitational phenomena are driven by the increase in relief energy produced by the activity of the western fault segment.
The eastern fault, by contrast, is exploited exclusively, in its shallowest portion, as a sliding surface for the rock masses. The triggering of the gravitational phenomena would therefore have occurred after the onset of activity of the western fault segment which, according to Gori et al. (2007), took place later than the activation of the eastern segment, after the Early Pleistocene. This evolutionary picture is suggested by the fact that the formation of some of the gravitational trenches has displaced slope breccias attributed to the Early Pleistocene. These breccias were in fact deposited on a palaeo-landscape, currently identifiable between the two fault segments and hanging above the present-day plain, which was located at the base of the fault scarp of the eastern segment when the western one was not yet active. The exploratory excavations within two gravitational trenches revealed the displacement of the infill deposits along the scarps bounding these depressions and along secondary gravitational shear planes. The deposits exposed by the excavations mainly consist of slope debris, colluvial sediments and palaeosols. Radiometric dating performed on organic material sampled from the palaeosols and on charcoal fragments contained in the colluvial units indicates that movement along the trench scarps continued into the late Holocene, specifically after 10660-10540 cal. BC/10430-9910 cal. BC. This would indicate that the gravitational deformations affecting the western slope of Monte Morrone can be considered active. Finally, although no clear evidence was recognised linking activation events of the Monte Morrone normal fault system with episodes of acceleration of the gravitational movements, this cannot be ruled out and, indeed, should be considered probable

    Time intervals to assess active and capable faults for engineering practices in Italy

    The time span necessary to define a fault as ‘active and capable’ can mainly be derived from the framework of the regulations and the literature produced since the 1970s on risk estimation in the engineering planning of strategic buildings. Within this framework, two different lines of thought can be identified, which have mainly developed in the USA. On the one side, there is a tendency to produce ‘narrow’ chronological definitions. This is particularly evident in the regulatory acts for the planning of nuclear reactors. The much more effective second line of thought anchors the chronological definitions of the terms ‘active’ and, therefore, ‘capable’ to the concept of ‘seismotectonic domain’. As the domains differ between regions of the world, the chronological definition cannot be univocal; i.e., different criteria are needed to define fault activity, depending on the characteristics of the local tectonic domain and of the related recurrence times of fault activation. Current research on active tectonics indicates that methodological aspects can also condition the chronological choice used to define fault activity. Indeed, this practice implies the use of earth science methods, the applications of which can be inherently limited. For example, limits and constraints might be related to the availability of datable sediments and landforms that can be used to define the recent fault kinematic history. For the Italian territory, we consider two main tectonic domains: (a) the compressive domain along the southern margin of the Alpine chain and the northern and northeastern margins of the Apennines, which is characterised by the activity of blind thrusts and reverse faults; and (b) the extensional domain of the Apennines and the Calabria region, which often manifests through the activity of seismogenic normal and normal-oblique faults.
In case (a), the general geomorphic and subsurface evidence of recent activity suggests that a reverse blind fault or a blind thrust should be considered active and potentially capable if it shows evidence of activity during the Quaternary (i.e., over the last 2.6 Myr), unless information is available that documents its inactivity since at least the Last Glacial Maximum (LGM) (ca. 20 ka). The choice of the LGM period as the minimum age necessary to define fault inactivity is related to practical aspects (the wide distribution of LGM deposits and landforms) and to the evidence that a ca. 20-kyr window used to assess fault inactivity precautionarily spans a number of seismic cycles. In the extensional domains of the Apennines and the Calabria region, the general geological setting suggests that the present tectonic regime has been active since the beginning of the Middle Pleistocene. Therefore, we propose that a normal fault in the Italian extensional domain should be considered active and capable if it displays evidence of activation in the last 0.8 Myr, unless it is sealed by deposits or landforms no younger than the LGM. The choice of the LGM as the minimum age to ascertain fault inactivity follows the same criteria described for the compressive tectonic domain